Hessian matrix


Differentially Private Truncation of Unbounded Data via Public Second Moments

Cao, Zilong, Bi, Xuan, Zhang, Hai

arXiv.org Machine Learning

Data privacy is important in the AI era, and differential privacy (DP) is one of the gold-standard solutions. However, DP is typically applicable only if the data have a bounded underlying distribution. We address this limitation by leveraging second-moment information from a small amount of public data. We propose Public-moment-guided Truncation (PMT), which transforms private data using the public second-moment matrix and applies a principled truncation whose radius depends only on non-private quantities: the data dimension and the sample size. This transformation yields a well-conditioned second-moment matrix, enabling its inversion with a significantly strengthened ability to resist DP noise. Furthermore, we demonstrate the applicability of PMT to penalized and generalized linear regressions. Specifically, we design new loss functions and algorithms that ensure solutions in the transformed space can be mapped back to the original domain. We establish improvements in the models' DP estimation through theoretical error bounds, robustness guarantees, and convergence results, attributing the gains to the conditioning effect of PMT. Experiments on synthetic and real datasets confirm that PMT substantially improves the accuracy and stability of DP models.
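
A minimal NumPy sketch of the transform-and-truncate idea described above: whiten the private rows with the public second-moment matrix, then clip each row to a radius that depends only on the dimension d and sample size n. The radius formula c * sqrt(d * log(n)) and the function name pmt_truncate are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def pmt_truncate(X_private, X_public, c=1.0):
        """Hypothetical sketch of Public-moment-guided Truncation (PMT).

        Whitens private rows with the public second-moment matrix, then
        clips them to a radius depending only on non-private quantities
        (dimension d and sample size n). The radius formula is an
        illustrative assumption.
        """
        n, d = X_private.shape
        # Public second-moment matrix (non-private by assumption).
        M_pub = X_public.T @ X_public / X_public.shape[0]
        # Inverse square root of M_pub via eigendecomposition.
        w, V = np.linalg.eigh(M_pub)
        M_inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
        # Transform private data into the whitened space.
        Z = X_private @ M_inv_sqrt
        # Truncate: clip each row to a data-independent radius.
        radius = c * np.sqrt(d * np.log(n))
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        Z_trunc = Z * np.minimum(1.0, radius / np.maximum(norms, 1e-12))
        # Keep the transform so solutions can be mapped back later.
        return Z_trunc, M_inv_sqrt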



Boosting Adversarial Transferability by Achieving Flat Local Maxima

Neural Information Processing Systems

Specifically, we randomly sample an example and adopt a first-order procedure to approximate the Hessian/vector product, which makes the computation more efficient by interpolating between two neighboring gradients.
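
A standard first-order finite-difference approximation of the Hessian-vector product is consistent with the snippet above; the NumPy sketch below shows the general technique under that assumption, not the paper's exact procedure.

    import numpy as np

    def hvp_finite_difference(grad_fn, x, v, alpha=1e-3):
        """Approximate the Hessian-vector product H(x) @ v using only
        two neighboring gradients:
            H(x) v ~ (grad(x + a*v) - grad(x - a*v)) / (2a).
        `grad_fn` returns the gradient of the loss at a point."""
        v = v / (np.linalg.norm(v) + 1e-12)  # normalize the direction
        g_plus = grad_fn(x + alpha * v)
        g_minus = grad_fn(x - alpha * v)
        return (g_plus - g_minus) / (2.0 * alpha)

    # Example: quadratic loss L(x) = 0.5 * x^T A x, so grad = A x, H = A.
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    grad_fn = lambda x: A @ x
    x0 = np.array([1.0, -1.0])
    v = np.array([0.0, 1.0])
    print(hvp_finite_difference(grad_fn, x0, v))  # ~ A @ v = [0.5, 1.0]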


Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee

Liu, Yuanshi, Fang, Cong, Zhang, Tong (School of Intelligence Science and Technology, Peking University)

Neural Information Processing Systems

Sampling from a high-dimensional distribution serves as one of the key components in statistics, machine learning, and scientific computing, and constitutes the foundation of fields including Bayesian statistics and generative models [Liu and Liu, 2001, Brooks et al., 2011, Song et al., ...]
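
Since the snippet above is introductory, a brief sketch of a plain underdamped Langevin discretization may help anchor the title. This Euler-Maruyama step is a textbook baseline; the paper's double-randomized scheme refines it to obtain dimension-independent guarantees, so treat the step rule and names here as assumptions.

    import numpy as np

    def underdamped_langevin_step(x, v, grad_U, gamma, step, rng):
        """One Euler-Maruyama step of underdamped Langevin dynamics:
            dX = V dt,
            dV = -gamma * V dt - grad U(X) dt + sqrt(2 * gamma) dB_t.
        A standard baseline sampler, not the paper's algorithm."""
        noise = rng.normal(size=v.shape)
        v_new = v - step * (gamma * v + grad_U(x)) + np.sqrt(2 * gamma * step) * noise
        x_new = x + step * v_new
        return x_new, v_new

    # Sample from a standard Gaussian, U(x) = 0.5 * ||x||^2.
    rng = np.random.default_rng(0)
    grad_U = lambda x: x
    x, v = np.zeros(2), np.zeros(2)
    for _ in range(5000):
        x, v = underdamped_langevin_step(x, v, grad_U, gamma=2.0, step=0.01, rng=rng)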



How Sparse Can We Prune A Deep Network: A Fundamental Limit Perspective

Neural Information Processing Systems

Network pruning is a commonly used technique to alleviate the storage and computational burden of deep neural networks. However, a characterization of the fundamental limit of network pruning is still lacking. To close this gap, in this work we take a first-principles approach: we directly impose the sparsity constraint on the loss function and leverage the framework of statistical dimension in convex geometry, which enables us to characterize the sharp phase-transition point that can be regarded as the fundamental limit of the pruning ratio. Through this limit, we identify two key factors that determine the pruning-ratio limit, namely weight magnitude and network sharpness.
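
As an illustration of imposing a sparsity constraint in practice, here is a minimal magnitude-based pruning sketch in NumPy. The pruning ratio, shapes, and function name are assumptions for illustration; the sketch is a common baseline, not the paper's phase-transition analysis.

    import numpy as np

    def magnitude_prune(weights, prune_ratio):
        """Zero out the smallest-magnitude fraction of weights.

        `prune_ratio` in [0, 1) is the fraction of entries removed;
        the phase-transition question is how large this ratio can be
        before the loss constraint becomes infeasible.
        """
        flat = np.abs(weights).ravel()
        k = int(prune_ratio * flat.size)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
        mask = np.abs(weights) > threshold
        return weights * mask

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 4))
    W_pruned = magnitude_prune(W, prune_ratio=0.5)
    print(f"sparsity: {np.mean(W_pruned == 0):.2f}")  # ~0.50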